
    The Recommendation Architecture: Lessons from Large-Scale Electronic Systems Applied to Cognition

    A fundamental approach of cognitive science is to understand cognitive systems by separating them into modules. Theoretical reasons are described which force any system which learns to perform a complex combination of real-time functions into a modular architecture. Constraints on the way modules divide up functionality are also described. The architecture of such systems, including biological systems, is constrained into a form called the recommendation architecture, with a primary separation between clustering and competition. Clustering is a modular hierarchy which manages the interactions between functions on the basis of detection of functionally ambiguous repetition. Change to previously detected repetitions is limited in order to maintain a meaningful, although partially ambiguous, context for all modules which make use of the previously defined repetitions. Competition interprets the repetition conditions detected by clustering as a range of alternative behavioural recommendations, and uses consequence feedback to learn to select the most appropriate recommendation. The requirements imposed by functional complexity result in very specific structures and processes which resemble those of brains. The design of an implemented electronic version of the recommendation architecture is described, and it is demonstrated that the system can heuristically define its own functionality and learn without disrupting earlier learning. The recommendation architecture is compared with a range of alternative cognitive architectural proposals, and the conclusion is reached that it has substantial potential both for understanding brains and for designing systems to perform cognitive functions.
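
    The following toy sketch (Python; all names and the learning rule are invented for illustration and are not the authors' implementation) shows the separation the abstract describes: clustering modules report possibly ambiguous repetition detections, and a competition component treats them as behavioural recommendations and adjusts its selection weights from consequence feedback.

```python
# Illustrative sketch only: a toy "clustering plus competition" loop.
# All class names and the learning rule are assumptions for illustration.
import random
from collections import defaultdict

class ClusteringModule:
    """Detects whether a known repetition (pattern) is present in the input."""
    def __init__(self, name, pattern):
        self.name = name
        self.pattern = pattern  # set of input features this module responds to

    def detect(self, inputs):
        # Ambiguous output: partial overlap still produces a (weaker) detection.
        return len(self.pattern & inputs) / len(self.pattern)  # 0.0 .. 1.0

class Competition:
    """Interprets module detections as behaviour recommendations and learns
    from consequence feedback which recommendation to select."""
    def __init__(self, behaviours):
        self.behaviours = behaviours
        self.weights = defaultdict(lambda: defaultdict(lambda: 1.0))

    def select(self, detections):
        scores = {b: sum(strength * self.weights[name][b]
                         for name, strength in detections.items())
                  for b in self.behaviours}
        return max(scores, key=scores.get)

    def feedback(self, detections, behaviour, reward, lr=0.1):
        # Strengthen or weaken the link between active detections and the
        # behaviour that was actually selected, based on its consequences.
        for name, strength in detections.items():
            self.weights[name][behaviour] += lr * reward * strength

# Toy usage: two modules, two candidate behaviours, random consequence feedback.
modules = [ClusteringModule("m1", {"a", "b"}), ClusteringModule("m2", {"b", "c"})]
competition = Competition(behaviours=["approach", "avoid"])
for _ in range(5):
    inputs = set(random.sample(["a", "b", "c", "d"], 2))
    detections = {m.name: m.detect(inputs) for m in modules}
    chosen = competition.select(detections)
    competition.feedback(detections, chosen, reward=random.choice([-1.0, 1.0]))
```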

    A Functional Architecture Approach to Neural Systems

    The technology for the design of systems to perform extremely complex combinations of real-time functionality has developed over a long period. This technology is based on the use of a hardware architecture with a physical separation into memory and processing, and a software architecture which divides functionality into a disciplined hierarchy of software components which exchange unambiguous information. This technology experiences difficulty in the design of systems which perform parallel processing, and extreme difficulty in the design of systems which can heuristically change their own functionality. These limitations derive from the approach to information exchange between functional components. A design approach in which functional components can exchange ambiguous information leads to systems with the recommendation architecture, which are less subject to these limitations. Biological brains have been constrained by natural pressures to adopt functional architectures with this different information exchange approach. Neural networks have not made a complete shift to the use of ambiguous information, and do not adequately manage context for ambiguous information exchange between modules. As a result, such networks cannot be scaled to complex functionality. Simulations of systems with the recommendation architecture demonstrate the capability to heuristically organize to perform complex functionality.

    A physiologically based approach to consciousness

    The nature of a scientific theory of consciousness is defined by comparison with scientific theories in the physical sciences. The differences between physical, algorithmic and functional complexity are highlighted, and the architecture of a functionally complex electronic system, created to relate system operations to device operations, is compared with a scientific theory. It is argued that there are two qualitatively different types of functional architecture: electronic systems have the instruction architecture, based on exchange of unambiguous information between functional components, while biological brains have been constrained by natural selection pressures into the recommendation architecture, based on exchange of ambiguous information. The mechanisms by which a recommendation architecture could heuristically define its own functionality are described, and compared with memory in biological brains. Dream sleep is interpreted as the mechanism for minimizing information exchange between functional components in a heuristically defined functional system. The functional role of consciousness of self is discussed, and the route by which the experience of that function, described at the psychological level, can be related to physiology through a functional architecture is outlined.

    A Physiologically Based System Theory of Consciousness

    A system which uses large numbers of devices to perform a complex functionality is forced to adopt a simple functional architecture by the needs to construct copies of, repair, and modify the system. A simple functional architecture means that functionality is partitioned into relatively equal-sized components on many levels of detail down to device level, a mapping exists between the different levels, and exchange of information between components is minimized. In the instruction architecture functionality is partitioned on every level into instructions, which exchange unambiguous system information and therefore output system commands. The von Neumann architecture is a special case of the instruction architecture in which instructions are coded as unambiguous system information. In the recommendation (or pattern extraction) architecture functionality is partitioned on every level into repetition elements, which can freely exchange ambiguous information and therefore output only system action recommendations which must compete for control of system behavior. Partitioning is optimized to the best tradeoff between even partitioning and minimum cost of distributing data. Natural pressures deriving from the need to construct copies under DNA control, recover from errors, failures and damage, and add new functionality derived from random mutations have resulted in biological brains being constrained to adopt the recommendation architecture. The resultant hierarchy of functional separations can be the basis for understanding psychological phenomena in terms of physiology. A theory of consciousness is described based on the recommendation architecture model for biological brains. Consciousness is defined at a high level in terms of sensory-independent image sequences, including self images, with the role of extending the search of records of individual experience for behavioral guidance in complex social situations. Functional components of this definition of consciousness are developed, and it is demonstrated that these components can be translated through subcomponents to descriptions in terms of known and postulated physiological mechanisms.

    Using the Change Manager Model for the Hippocampal System to Predict Connectivity and Neurophysiological Parameters in the Perirhinal Cortex

    Theoretical arguments demonstrate that practical considerations, including the needs to limit physiological resources and to learn without interference with prior learning, severely constrain the anatomical architecture of the brain. These arguments identify the hippocampal system as the change manager for the cortex, with the role of selecting the most appropriate locations for cortical receptive field changes at each point in time and driving those changes. This role results in the hippocampal system recording the identities of groups of cortical receptive fields that changed at the same time. These types of records can also be used to reactivate the receptive fields active during individual unique past events, providing mechanisms for episodic memory retrieval. Our theoretical arguments identify the perirhinal cortex as one important focal point both for driving changes and for recording and retrieving episodic memories. The retrieval of episodic memories must not drive unnecessary receptive field changes, and this consideration places strong constraints on neuron properties and connectivity within and between the perirhinal cortex and regular cortex. Hence the model predicts a number of such properties and connectivity patterns. Experimental tests of these falsifiable predictions would clarify how change is managed in the cortex and how episodic memories are retrieved.

    Seasonal variability of the warm Atlantic Water layer in the vicinity of the Greenland shelf break

    The warmest water reaching the east and west coast of Greenland is found between 200 m and 600 m. Whilst important for melting Greenland's outlet glaciers, limited winter observations of this layer prohibit determination of its seasonality. To address this, temperature data from Argo profiling floats, a range of sources within the World Ocean Database and unprecedented coverage from marine-mammal borne sensors have been analysed for the period 2002-2011. A significant seasonal range in temperature (~1-2 °C) is found in the warm layer, in contrast to most of the surrounding ocean. The phase of the seasonal cycle exhibits considerable spatial variability, with the warmest water found near the eastern and southwestern shelf-break towards the end of the calendar year. High-resolution ocean model trajectory analysis suggests the timing of the arrival of the year's warmest water is a function of advection time from the subduction site in the Irminger Basin.
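
    As an illustration of the kind of estimate described here, the sketch below fits an annual harmonic to irregularly sampled temperatures by least squares to recover a seasonal amplitude and the day of year of the temperature maximum. The data and function names are invented; this is not the paper's processing chain.

```python
# Illustrative sketch: estimating the amplitude and phase of a seasonal cycle
# by least-squares fitting an annual harmonic to irregularly sampled
# temperatures (e.g. from floats or tagged seals).
import numpy as np

def fit_annual_cycle(day_of_year, temperature):
    """Fit T(t) = mean + a*cos(w t) + b*sin(w t), with w = 2*pi/365.25."""
    w = 2.0 * np.pi / 365.25
    t = np.asarray(day_of_year, dtype=float)
    T = np.asarray(temperature, dtype=float)
    design = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    coeffs, *_ = np.linalg.lstsq(design, T, rcond=None)
    mean, a, b = coeffs
    amplitude = np.hypot(a, b)        # half the peak-to-peak seasonal range
    phase_day = np.arctan2(b, a) / w  # day of year of the temperature maximum
    return mean, amplitude, phase_day % 365.25

# Example with synthetic data: warmest water arriving late in the calendar year.
rng = np.random.default_rng(0)
days = rng.uniform(0, 365.25, 200)
temps = 4.0 + 0.8 * np.cos(2 * np.pi * (days - 330) / 365.25) + rng.normal(0, 0.2, 200)
print(fit_annual_cycle(days, temps))  # amplitude ~0.8, phase near day 330
```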

    Length of carotid stenosis predicts peri-procedural stroke or death and restenosis in patients randomized to endovascular treatment or endarterectomy.

    BACKGROUND: The anatomy of carotid stenosis may influence the outcome of endovascular treatment or carotid endarterectomy. Whether anatomy favors one treatment over the other in terms of safety or efficacy has not been investigated in randomized trials. METHODS: In 414 patients with mostly symptomatic carotid stenosis randomized to endovascular treatment (angioplasty or stenting; n = 213) or carotid endarterectomy (n = 211) in the Carotid and Vertebral Artery Transluminal Angioplasty Study (CAVATAS), the degree and length of stenosis and plaque surface irregularity were assessed on baseline intraarterial angiography. Outcome measures were stroke or death occurring between randomization and 30 days after treatment, and ipsilateral stroke and restenosis ≥50% during follow-up. RESULTS: Carotid stenosis longer than 0.65 times the common carotid artery diameter was associated with increased risk of peri-procedural stroke or death after both endovascular treatment [odds ratio 2.79 (1.17-6.65), P = 0.02] and carotid endarterectomy [2.43 (1.03-5.73), P = 0.04], and with increased long-term risk of restenosis in endovascular treatment [hazard ratio 1.68 (1.12-2.53), P = 0.01]. The excess in restenosis after endovascular treatment compared with carotid endarterectomy was significantly greater in patients with long stenosis than with short stenosis at baseline (interaction P = 0.003). Results remained significant after multivariate adjustment. No associations were found for degree of stenosis and plaque surface. CONCLUSIONS: Increasing stenosis length is an independent risk factor for peri-procedural stroke or death in endovascular treatment and carotid endarterectomy, without favoring one treatment over the other. However, the excess restenosis rate after endovascular treatment compared with carotid endarterectomy increases with longer stenosis at baseline. Stenosis length merits further investigation in carotid revascularisation trials.
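
    For readers unfamiliar with the reported statistics, the sketch below shows how an odds ratio with a 95% confidence interval for a binary baseline factor (long versus short stenosis) and a binary 30-day outcome can be obtained from a logistic regression. The simulated data, threshold and event rates are invented and bear no relation to the CAVATAS results.

```python
# Illustrative sketch: an odds ratio with 95% CI from a logistic regression.
# Variable names and data are made up; this is not the trial's analysis code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
long_stenosis = rng.integers(0, 2, n)             # 1 = stenosis length above threshold
# Simulate a higher event rate when the stenosis is long (assumed effect size).
p_event = np.where(long_stenosis == 1, 0.10, 0.04)
event = rng.binomial(1, p_event)                  # 1 = stroke or death within 30 days

X = sm.add_constant(long_stenosis.astype(float))
fit = sm.Logit(event, X).fit(disp=False)

odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {fit.pvalues[1]:.3f}")
```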

    An assessment of the Arctic Ocean in a suite of interannual CORE-II simulations. Part III: Hydrography and fluxes

    In this paper we compare the simulated Arctic Ocean in 15 global ocean–sea ice models in the framework of the Coordinated Ocean-ice Reference Experiments, phase II (CORE-II). Most of these models are the ocean and sea-ice components of the coupled climate models used in the Coupled Model Intercomparison Project Phase 5 (CMIP5) experiments. We mainly focus on the hydrography of the Arctic interior, the state of the Atlantic Water layer, and the heat and volume transports at the gateways of the Davis Strait, the Bering Strait, the Fram Strait and the Barents Sea Opening. We found that there is a large spread in temperature in the Arctic Ocean between the models, and generally large differences compared to the observed temperature at intermediate depths. Models with a warm bias show a strong warm anomaly in the inflow of Atlantic Water entering the Arctic Ocean through the Fram Strait. Another process that is not represented accurately in the CORE-II models is the formation of cold and dense water originating on the eastern shelves. In the cold-bias models, excessive cold water forms in the Barents Sea and spreads into the Arctic Ocean through the St. Anna Trough. There is a large spread in the simulated mean heat and volume transports through the Fram Strait and the Barents Sea Opening. The models agree more on the decadal variability, which is to a large degree dictated by the common atmospheric forcing. We conclude that the CORE-II model study helps us to understand the crucial biases in the Arctic Ocean. The current coarse-resolution, state-of-the-art ocean models need to represent more accurately the Atlantic Water inflow into the Arctic and the density currents coming from the shelves.
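
    The gateway comparisons rest on standard transport diagnostics: volume transport is the area integral of the velocity normal to a section, and heat transport is density times specific heat capacity times the area integral of velocity times (T − Tref), relative to a reference temperature. The sketch below illustrates these definitions on a toy section; the grid, values and constants are invented for illustration and this is not the CORE-II diagnostic code.

```python
# Illustrative sketch of volume and heat transport through a strait section.
import numpy as np

RHO = 1027.0   # seawater density, kg m^-3
CP = 3985.0    # specific heat capacity, J kg^-1 K^-1
T_REF = 0.0    # reference temperature, degC

def section_transports(v, temp, dx, dz):
    """v, temp: (nz, nx) normal velocity [m/s] and temperature [degC] on a section;
    dx, dz: cell widths [m] and layer thicknesses [m]. Returns (Sverdrups, terawatts)."""
    area = np.outer(dz, dx)                                 # cell areas, m^2
    volume = np.nansum(v * area)                            # m^3/s
    heat = RHO * CP * np.nansum(v * (temp - T_REF) * area)  # W
    return volume / 1e6, heat / 1e12

# Toy section: uniform inflow of ~3 degC Atlantic Water.
nz, nx = 30, 50
v = np.full((nz, nx), 0.05)      # 5 cm/s towards the Arctic
temp = np.full((nz, nx), 3.0)
dx = np.full(nx, 5000.0)         # 5 km wide cells
dz = np.full(nz, 20.0)           # 20 m thick layers
print(section_transports(v, temp, dx, dz))  # roughly 7.5 Sv and ~90 TW
```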

    Prospects for observing and localizing gravitational-wave transients with Advanced LIGO and Advanced Virgo

    We present a possible observing scenario for the Advanced LIGO and Advanced Virgo gravitational-wave detectors over the next decade, with the intention of providing information to the astronomy community to facilitate planning for multi-messenger astronomy with gravitational waves. We determine the expected sensitivity of the network to transient gravitational-wave signals, and study the capability of the network to determine the sky location of the source. We report our findings for gravitational-wave transients, with particular focus on gravitational-wave signals from the inspiral of binary neutron-star systems, which are considered the most promising for multi-messenger astronomy. The ability to localize the sources of the detected signals depends on the geographical distribution of the detectors and their relative sensitivity, and 90% credible regions can be as large as thousands of square degrees when only two sensitive detectors are operational. Determining the sky position of a significant fraction of detected signals to areas of 5 deg^2 to 20 deg^2 will require at least three detectors of sensitivity within a factor of ~2 of each other and with a broad frequency bandwidth. Should the third LIGO detector be relocated to India as expected, a significant fraction of gravitational-wave signals will be localized to a few square degrees by gravitational-wave observations alone.
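
    The dependence of localization on detector geometry follows largely from timing: a single detector pair measures only the arrival-time difference of the signal, which fixes the angle between the source direction and the baseline and therefore confines the source to a ring on the sky. The sketch below illustrates this geometry with rough Earth-centred coordinates for the two LIGO sites; it is purely illustrative and is not the LIGO/Virgo localization pipeline.

```python
# Illustrative sketch: why two detectors localise a source only to a ring.
# Detector coordinates are approximate Earth-centred positions (metres),
# used purely for illustration.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

HANFORD = np.array([-2.161e6, -3.834e6, 4.600e6])
LIVINGSTON = np.array([-7.427e4, -5.496e6, 3.225e6])

def projected_delay(det_a, det_b, source_dir):
    """Light-travel time of the detector baseline projected onto the source
    direction; its magnitude equals the arrival-time difference between sites."""
    baseline = det_a - det_b
    return baseline @ source_dir / C

def ring_angle(det_a, det_b, dt):
    """Angle (degrees) between the baseline and the source direction implied by
    a projected delay dt; every direction at this angle (a ring) is allowed."""
    baseline = det_a - det_b
    cos_theta = np.clip(dt * C / np.linalg.norm(baseline), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

source = np.array([0.3, -0.5, 0.8])
source /= np.linalg.norm(source)
dt = projected_delay(HANFORD, LIVINGSTON, source)
print(f"delay = {dt*1e3:.2f} ms, ring angle = {ring_angle(HANFORD, LIVINGSTON, dt):.1f} deg")
```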